Ans: Apache Kafka is publish-subscribe messaging rethought as a distributed commit log: a high-throughput, distributed messaging system. Kafka is a general-purpose publish-subscribe messaging system that offers strong durability, scalability, and fault-tolerance guarantees. It is not specifically designed for Hadoop; the Hadoop ecosystem is just one of its possible consumers.
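For illustration, publishing a message from Java takes only a few lines. A minimal sketch, assuming a broker at localhost:9092 and a topic named demo-topic (both hypothetical):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // try-with-resources closes the producer and flushes any pending messages
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "hello kafka"));
        }
    }
}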
Ans: Bolt:- Bolts represent the processing logic units in Storm. One can use bolts to do any kind of processing, such as filtering, aggregating, joining, interacting with data stores, and talking to external systems. Bolts can also emit tuples (data messages) for subsequent bolts to process. Additionally, bolts are responsible for acknowledging tuples once they are done processing them.
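As a sketch, a simple filtering bolt might look like the following (package names assume the org.apache.storm 1.x layout; the field name "word" is hypothetical):

import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

// Passes through only tuples whose "word" field is non-empty.
public class FilterBolt extends BaseBasicBolt {
    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        String word = input.getStringByField("word");
        if (word != null && !word.isEmpty()) {
            collector.emit(new Values(word)); // BaseBasicBolt acks the input tuple automatically
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}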
Spout:- Spouts represent the source of data in Storm. You can write spouts to read data from data sources such as databases, distributed file systems, messaging frameworks, etc. Spouts can broadly be classified into the following –
-Reliable – These spouts have the capability to replay tuples (a tuple is a unit of data in the data stream). This helps applications achieve 'at least once' message processing semantics: in case of failure, tuples can be replayed and processed again. Spouts that fetch data from messaging frameworks are generally reliable, as these frameworks provide a mechanism to replay messages (a reliable-spout sketch follows this list).
-Unreliable – These spouts don’t have the capability to replay tuples. Once a tuple is emitted, it cannot be replayed, irrespective of whether it was processed successfully or not. This type of spout follows 'at most once' message processing semantics.
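A minimal reliable-spout sketch, assuming the Storm 1.x package layout (the sample sentences are made up). Emitting each tuple with a message ID is what makes a spout reliable: Storm will call ack() or fail() with that ID, allowing the tuple to be replayed on failure.

import java.util.Map;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

public class SentenceSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private final String[] sentences = {"the cow jumped over the moon", "an apple a day"};
    private int index = 0;

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        // The second argument is the message ID; omitting it would make this
        // an unreliable spout ('at most once' semantics).
        collector.emit(new Values(sentences[index]), index);
        index = (index + 1) % sentences.length;
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("sentence"));
    }
}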
Tuple:- The tuple is the main data structure in Storm. A tuple is a named list of values, where each value can be of any type. Tuples are dynamically typed — the types of fields do not need to be declared. Tuples have helper methods such as getInteger and getString to get field values without having to cast the result. Storm needs to know how to serialize all the values in a tuple. By default, Storm knows how to serialize primitive types, strings, and byte arrays. If you want to use another type, you’ll need to implement and register a serializer for that type.
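For illustration, the typed accessors might be used inside a bolt like this (a sketch; the field name "count" and the tuple layout are hypothetical):

import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Tuple;

public class PrinterBolt extends BaseBasicBolt {
    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        String word = input.getString(0);                 // positional access
        Integer count = input.getIntegerByField("count"); // access by field name, no cast needed
        System.out.println(word + " -> " + count);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) { }
}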
Ans: Yes, it can also act as a proxy by using the mod_proxy module. This module implements a proxy, gateway, or cache for Apache. It implements proxying capability for AJP13 (Apache JServ Protocol version 1.3), FTP, CONNECT (for SSL), HTTP/0.9, HTTP/1.0, and (since Apache 1.3.23) HTTP/1.1. The module can be configured to connect to other proxy modules for these and other protocols.
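A minimal reverse-proxy configuration sketch for httpd.conf, using Apache httpd 2.x directive names (the module paths and backend address are hypothetical):

# Load the proxy modules (paths vary by distribution)
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

# Forward requests for /app to a backend HTTP server
ProxyPass        /app http://backend.example.com:8080/app
ProxyPassReverse /app http://backend.example.com:8080/app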
Ans: The first two are remnants from the NCSA days; generally, you should be fine if you delete the first two and stick with httpd.conf.
Ans: ZeroMQ is “a library which extends the standard socket interfaces with features traditionally provided by specialized messaging middleware products”. Storm relies on ZeroMQ primarily for task-to-task communication in running Storm topologies.
Ans: There are three distinct layers to Storm’s codebase.
-First, Storm was designed from the very beginning to be compatible with multiple languages. Nimbus is a Thrift service and topologies are defined as Thrift structures. The usage of Thrift allows Storm to be used from any language.
-Second, all of Storm’s interfaces are specified as Java interfaces. So even though there’s a lot of Clojure in Storm’s implementation, all usage must go through the Java API. This means that every feature of Storm is always available via Java.
-Third, Storm’s implementation is largely in Clojure. Line-wise, Storm is about half Java code, half Clojure code. But Clojure is much more expressive, so in reality, the great majority of the implementation logic is in Clojure.
Ans: The cleanup method is called when a Bolt is being shut down and should clean up any resources that were opened. There’s no guarantee that this method will be called on the cluster: for instance, if the machine the task is running on blows up, there’s no way to invoke the method. The cleanup method is intended for when you run topologies in local mode (where a Storm cluster is simulated in-process), and you want to be able to run and kill many topologies without suffering any resource leaks.
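As a sketch, a bolt might open a resource in prepare() and release it in cleanup() like this (Storm 1.x package layout; the output file path is hypothetical):

import java.io.FileNotFoundException;
import java.io.PrintWriter;
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class FileWriterBolt extends BaseRichBolt {
    private transient PrintWriter writer;
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        try {
            writer = new PrintWriter("/tmp/bolt-output.txt"); // hypothetical path
        } catch (FileNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void execute(Tuple input) {
        writer.println(input.getString(0));
        collector.ack(input);
    }

    @Override
    public void cleanup() {
        // Best effort: guaranteed in local mode, but not on a real cluster.
        if (writer != null) writer.close();
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) { }
}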
Ans: To kill a topology, simply run:
storm kill {stormname}
Give the same name to storm kill as you used when submitting the topology. Storm won’t kill the topology immediately. Instead, it deactivates all the spouts so that they don’t emit any more tuples, and then waits for Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS seconds before destroying all the workers. This gives the topology enough time to finish any tuples it was processing when it got killed.
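The wait time can also be overridden on the command line with the -w flag, which takes the number of seconds to wait before the workers are destroyed:
storm kill {stormname} -w 10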
Ans: A CombinerAggregator is used to combine a set of tuples into a single field. It has the following signature:
public interface CombinerAggregator<T> extends Serializable {
T init(TridentTuple tuple);
T combine(T val1, T val2);
T zero();
}
Storm calls the init() method with each tuple, and then repeatedly calls the combine() method until the partition is processed. The values passed into the combine() method are partial aggregations, the result of combining the values returned by calls to init().
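For example, the canonical Count aggregator can be implemented as follows (package names assume the Storm 1.x Trident layout):

import org.apache.storm.trident.operation.CombinerAggregator;
import org.apache.storm.trident.tuple.TridentTuple;

public class Count implements CombinerAggregator<Long> {
    @Override
    public Long init(TridentTuple tuple) {
        return 1L; // each tuple contributes a count of one
    }

    @Override
    public Long combine(Long val1, Long val2) {
        return val1 + val2; // merge two partial counts
    }

    @Override
    public Long zero() {
        return 0L; // identity value for an empty partition
    }
}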
Ans: Yes, to update a running topology, the only option currently is to kill the current topology and resubmit a new one. A planned feature is to implement a Storm swap command that swaps a running topology with a new one, ensuring minimal downtime and no chance of both topologies processing tuples at the same time.
Ans: Java applications are not hosted in Apache itself; Apache can only be connected to another Java webapp hosting web server using the mod_jk connector.
Ans: This module creates dynamically configured virtual hosts, by allowing the IP address and/or the Host: header of the HTTP request to be used as part of the pathname to determine what files to serve. This allows for easy use of a huge number of virtual hosts with similar configurations.
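A hypothetical mass virtual hosting sketch for httpd.conf (%0 expands to the entire Host: header):

UseCanonicalName Off
# www.example.com would be served from /var/www/www.example.com/htdocs
VirtualDocumentRoot /var/www/%0/htdocs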
Ans: Struts is an open-source framework for creating Java web applications.
Ans: No. The root process opens port 80, but never serves requests on it itself, so no user will actually enter the site with root rights. If you kill the root process, you will see the child processes disappear as well.